Refactor pre-commit configuration and enhance development scripts (#309)
AnkushMalaker merged 5 commits into `feat/asr-progress`
Conversation
- Removed local hooks for Robot Framework tests and cleanup from `.pre-commit-config.yaml`, streamlining the pre-commit setup.
- Updated `Makefile` to install pre-commit using the `uv` tool, improving dependency management.
- Enhanced `restart.sh`, `start.sh`, `status.sh`, and `stop.sh` scripts to source a new `check_uv.sh` script for better environment validation.
- Added new environment variables for Galileo observability in `.env.template`, improving observability setup.
- Introduced OpenTelemetry initialization in `app_factory.py` for enhanced observability during application runtime.
Important: Review skipped. Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.
📝 Walkthrough

This pull request introduces OpenTelemetry/Galileo observability integration, button-event handling for audio streams, keyword-anywhere command triggering, diarization support in transcription, and session buffer tracking. It also refactors cleanup logic, adds preflight UV checks to deployment scripts, normalizes quote styles, and updates test configurations.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes. Pre-merge checks: ✅ 3 passed.
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
backends/advanced/src/advanced_omi_backend/services/audio_stream/producer.py (1)
315-324: ⚠️ Potential issue | 🟡 Minor: Hardcoded `"deepgram"` provider in the fallback buffer initialization.

When `init_session` is not called before `add_audio_chunk`, the fallback path unconditionally sets `"provider": "deepgram"` regardless of the actual configured provider. If the service is later extended to support additional providers, this will silently store incorrect metadata for any session that bypasses `init_session`.

🔧 Proposed fix

```diff
-            self.session_buffers[session_id] = {
-                "buffer": b"",
-                "chunk_count": 0,
-                "user_id": user_id,
-                "client_id": client_id,
-                "stream_name": stream_name,
-                "provider": "deepgram",
-            }
+            logger.warning(
+                f"⚠️ add_audio_chunk called for {session_id} without prior init_session; "
+                "provider metadata will be unknown"
+            )
+            self.session_buffers[session_id] = {
+                "buffer": b"",
+                "chunk_count": 0,
+                "user_id": user_id,
+                "client_id": client_id,
+                "stream_name": stream_name,
+                "provider": "unknown",
+            }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/audio_stream/producer.py` around lines 315 - 324, The fallback initialization in add_audio_chunk (the block that creates entries in self.session_buffers when session_id not found) hardcodes "provider": "deepgram"; change it to use the configured provider instead by reading the backend's provider config (e.g., self.provider or whatever config field is used by init_session/setup) so the entry mirrors the actual provider; update the fallback to set "provider" dynamically (use the same source init_session uses) and verify associated keys like "stream_name" and "client_id" are consistent with init_session's logic.

backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py (2)
315-329: ⚠️ Potential issue | 🔴 Critical: Unreachable dead code after `raise`; debug logging and possibly intended post-response logic will never execute.

Lines 322–329 sit inside the `except httpx.HTTPStatusError` block after the `raise RuntimeError(…) from e` on line 317. Python never reaches past a `raise`, so this Deepgram debug-logging block is dead code.

If this logging is meant to run on successful responses, it needs to be dedented to the method-body level (after the entire `try`/`except`), between the exception handling and the "Extract normalized shape" section.

🐛 Proposed fix

```diff
             ) from e
-            # DEBUG: Log Deepgram response structure
-            if "results" in data and "channels" in data.get("results", {}):
-                channels = data["results"]["channels"]
-                if channels and "alternatives" in channels[0]:
-                    alt = channels[0]["alternatives"][0]
-                    logger.debug(
-                        f"DEBUG Registry: Deepgram alternative keys: {list(alt.keys())}"
-                    )

         # Extract normalized shape
+        # DEBUG: Log Deepgram response structure
+        if "results" in data and "channels" in data.get("results", {}):
+            channels = data["results"]["channels"]
+            if channels and "alternatives" in channels[0]:
+                alt = channels[0]["alternatives"][0]
+                logger.debug(
+                    f"DEBUG Registry: Deepgram alternative keys: {list(alt.keys())}"
+                )
+
         text, words, segments = "", [], []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py` around lines 315 - 329, The Deepgram debug-logging block is unreachable because it comes after the raise in the except httpx.HTTPStatusError handler; move the debug block out of that except and place it after the try/except (i.e., where successful response processing occurs) so it runs on successful responses. Locate the except block handling httpx.HTTPStatusError (the raise RuntimeError referencing self._name and url) and dedent the DEBUG section that references data, channels, alt, and logger.debug so it executes after the exception handling and before the "Extract normalized shape" logic.
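The fix above follows a standard pattern worth stating on its own: code that should run only on success belongs in the `try` statement's `else` clause, never after a `raise` inside a handler. A minimal standalone sketch (names are illustrative, not from the codebase):

```python
def fetch_and_inspect(parse):
    """Run parse(); translate failures, post-process only on success."""
    try:
        data = parse()
    except ValueError as e:
        # Anything placed after this raise inside the except block would be
        # dead code -- Python never executes past a raise.
        raise RuntimeError("parse failed") from e
    else:
        # Success-only logic lives here, at the same level as the handler.
        return {"keys": sorted(data)}

print(fetch_and_inspect(lambda: {"b": 1, "a": 2}))  # {'keys': ['a', 'b']}
```

The `else` clause makes the success path structurally impossible to shadow with handler-local dead code, which is exactly the dedent the finding asks for.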
36-38: ⚠️ Potential issue | 🟡 Minor: Stale docstring; documented output no longer matches behavior.

Line 45 now only lowercases terms, so the example output `"hey vivi Hey Vivi chronicle Chronicle omi Omi"` is wrong. It should reflect the new behavior, e.g. `"hey vivi chronicle omi"`.

📝 Suggested fix

```diff
-    Output: "hey vivi Hey Vivi chronicle Chronicle omi Omi"
+    Output: "hey vivi chronicle omi"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py` around lines 36 - 38, The docstring in backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py is stale: its example Output shows title-cased tokens ("hey vivi Hey Vivi ...") but the current code now only lowercases terms; update the example Input/Output in the module docstring to reflect the real behavior (e.g. Input: "hey vivi\\nchronicle\\nomi" or "hey vivi, chronicle, omi" -> Output: "hey vivi chronicle omi") so the documented example matches the implementation in this transcription module.

backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py (3)
544-545: ⚠️ Potential issue | 🟡 Minor: Bare `except:` clauses catch `BaseException`.

Both bare `except:` blocks inside `get_stream_info` inadvertently catch `SystemExit`, `KeyboardInterrupt`, and `GeneratorExit`, which can mask critical signals or suppress shutdown.

🔧 Proposed fix

```diff
-        except:
+        except Exception:
             value = str(value)
```

Apply to both occurrences (lines ~544 and ~567).
Also applies to: 567-568
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py` around lines 544 - 545, In get_stream_info, replace the two bare "except:" blocks (around the value = str(value) lines) with "except Exception as e" to avoid catching BaseException subclasses like SystemExit/KeyboardInterrupt, and use the exception variable if needed (e.g., log or include e) while keeping the behavior of converting value to str; update both occurrences so only standard exceptions are caught.
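The difference is easy to demonstrate in isolation. This standalone sketch (hypothetical names) shows a bare `except:` masking `SystemExit` while `except Exception:` lets it propagate:

```python
def swallow_all(fn):
    try:
        return fn()
    except:  # noqa: E722 -- bare except catches BaseException (the anti-pattern)
        return "swallowed"

def swallow_errors_only(fn):
    try:
        return fn()
    except Exception:  # SystemExit / KeyboardInterrupt pass through
        return "swallowed"

def request_shutdown():
    raise SystemExit(1)

print(swallow_all(request_shutdown))       # prints: swallowed -- exit is masked
try:
    swallow_errors_only(request_shutdown)
except SystemExit:
    print("SystemExit propagated")
```

`Exception` sits one level below `BaseException` in the hierarchy, which is why the narrowed clause still catches ordinary errors but not interpreter control-flow signals.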
907-970: ⚠️ Potential issue | 🟠 Major: Redis connection leak; missing `try`/`finally` in `get_redis_sessions` and `clear_old_sessions`.

Both functions create a Redis client that is never closed if an exception is raised before reaching the explicit `close()` call. The outer `except` block doesn't close the client, so any failure during the scan or session-processing loop leaks the connection.

Additionally, the redis-py docs recommend `aclose()` over `close()` for async clients (see lines 964 and 1020).

🔧 Proposed fix for `get_redis_sessions`

```diff
     redis_client = aioredis.from_url(REDIS_URL)
-
-    # Get session keys
-    session_keys = []
-    ...
-    await redis_client.close()
-
-    return {"total_sessions": len(sessions), "sessions": sessions}
+    try:
+        # Get session keys
+        session_keys = []
+        ...
+        return {"total_sessions": len(sessions), "sessions": sessions}
+    finally:
+        await redis_client.aclose()
```

Apply the same `try`/`finally` wrapper (and `aclose()`) to `clear_old_sessions` around lines 991–1020.

Also applies to: 989-1020
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py` around lines 907 - 970, The get_redis_sessions and clear_old_sessions flows create an aioredis client and call close(), but if an exception occurs the client is never closed; wrap the Redis usage in a try/finally so the client is always cleaned up and replace close() with await redis_client.aclose(); specifically, after creating redis_client = aioredis.from_url(REDIS_URL) in get_redis_sessions and in clear_old_sessions, start a try: block containing the scan/processing logic and in finally: check that redis_client is not None and await redis_client.aclose() to guarantee the connection is closed even on exceptions.
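The guarantee the fix relies on can be shown without Redis at all. This sketch uses a hypothetical stand-in client (not the real redis-py API) to demonstrate that only the `try`/`finally` variant releases the resource when the work raises mid-flight:

```python
import asyncio

class FakeAsyncClient:
    """Hypothetical stand-in for an async Redis client."""
    def __init__(self):
        self.closed = False

    async def scan(self):
        raise ConnectionError("boom")  # simulate a failure during the scan

    async def aclose(self):
        self.closed = True

async def leaky(client):
    keys = await client.scan()   # raises, so the close below never runs
    await client.aclose()
    return keys

async def safe(client):
    try:
        return await client.scan()
    finally:
        await client.aclose()    # always runs, even when scan() raises

async def main():
    c1, c2 = FakeAsyncClient(), FakeAsyncClient()
    for fn, client in ((leaky, c1), (safe, c2)):
        try:
            await fn(client)
        except ConnectionError:
            pass
    return c1.closed, c2.closed

print(asyncio.run(main()))  # (False, True) -- only the guarded path closed
```

Creating the client *before* the `try` (as the diff does) matters too: if construction itself fails there is nothing to close, and the `finally` never runs against an unassigned name.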
1370-1397: ⚠️ Potential issue | 🟡 Minor: Dead code; `recent_conversations` is always an empty list.

`recent_conversations = []` is hard-coded on line 1370 with no code path that populates it. The `for conv in recent_conversations:` loop (lines 1381–1397) never executes, so `conversations_list` is always `[]` in the response. This appears to be a leftover from a prior implementation.

Either wire in the actual data source or remove the loop and the `conversations_list` key from the return payload to avoid misleading API consumers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py` around lines 1370 - 1397, The variable recent_conversations is hard-coded to an empty list so the conversations_list loop never runs; either populate recent_conversations from the proper async result (e.g., pull the conversations payload from the parallel query results where applicable) and map each conv into the existing dict structure (using conv.conversation_id, conv.user_id, conv.created_at, conv.title, conv.summary, conv.get_active_transcript_text), or remove the unused recent_conversations and the conversations_list construction entirely and stop returning the conversations_list field in the API response to avoid sending an always-empty array.

backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py (2)
129-144: ⚠️ Potential issue | 🟡 Minor: `NameError` in `finally` if the Redis connection fails.

If `redis.from_url` (line 69) raises before `pubsub` and `channel` are assigned, the `finally` block references those unbound names. The inner `except` catches the resulting `NameError`, but logs it as a misleading cleanup error rather than a plain no-op. Pre-initialize to `None` before the `try`:

🛡️ Proposed fix

```diff
+    redis_client = None
+    pubsub = None
+    channel = None
     try:
         redis_client = await redis.from_url(redis_url, decode_responses=True)
         pubsub = redis_client.pubsub()
         channel = f"transcription:interim:{session_id}"
         ...
     except Exception as e:
         ...
     finally:
         try:
-            await pubsub.unsubscribe(channel)
-            await pubsub.close()
-            await redis_client.aclose()
+            if pubsub and channel:
+                await pubsub.unsubscribe(channel)
+                await pubsub.close()
+            if redis_client:
+                await redis_client.aclose()
             ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py` around lines 129 - 144, Pre-initialize the variables used in the cleanup path (pubsub, channel, redis_client) to None before the try so they are always defined (e.g., set pubsub = channel = redis_client = None), and in the finally block only call await pubsub.unsubscribe(...), await pubsub.close(), and await redis_client.aclose() if the respective variable is not None (and channel is truthy); update the websocket_controller cleanup logic to guard those awaits and avoid referencing unbound names so a failed redis.from_url no longer triggers a misleading cleanup NameError.
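Why the pre-initialization matters can be reproduced in a few lines. In this sketch (illustrative names, not the controller's real code), the unguarded cleanup turns a connection failure into an `UnboundLocalError`, while the guarded version surfaces the original error:

```python
class Conn:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def connect(fail):
    if fail:
        raise ConnectionError("refused")
    return Conn()

def unguarded(fail):
    try:
        conn = connect(fail)
    finally:
        conn.close()  # UnboundLocalError if connect() raised first

def guarded(fail):
    conn = None       # defined before the try, so cleanup is always safe
    try:
        conn = connect(fail)
    finally:
        if conn is not None:
            conn.close()

try:
    unguarded(True)
except UnboundLocalError:
    print("cleanup itself blew up")   # the misleading secondary error

try:
    guarded(True)
except ConnectionError:
    print("original error surfaced")  # what operators actually need to see
```

The secondary error is worse than a no-op because it replaces the real failure at the top of the traceback, which is exactly the "misleading cleanup error" the finding describes.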
1406-1421: ⚠️ Potential issue | 🟡 Minor: `interim_subscriber_task` orphaned on duplicate `audio-start` in OMI mode.

When a second `audio-start` arrives while a session is already active, `_initialize_streaming_session` returns `None` (lines 460-462). This overwrites `interim_subscriber_task` with `None`, leaving the previously created subscriber task running but untracked. `_cleanup_websocket_connection` will then skip cancelling it, causing a task leak.

Guard the assignment to avoid overwriting an existing live task:

🛡️ Proposed fix

```diff
-                    interim_subscriber_task = await _initialize_streaming_session(
+                    new_task = await _initialize_streaming_session(
                         client_state,
                         audio_stream_producer,
                         user.user_id,
                         user.email,
                         client_id,
                         header.get("data", {...}),
                         websocket=ws,
                     )
+                    if new_task is not None:
+                        interim_subscriber_task = new_task
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py` around lines 1406 - 1421, The code currently overwrites interim_subscriber_task with None when _initialize_streaming_session returns None on duplicate "audio-start", orphaning the original task; change the call site to store the result in a temporary (e.g., new_task = await _initialize_streaming_session(...)) and only set interim_subscriber_task = new_task if new_task is not None, so the existing interim_subscriber_task remains tracked and will be cancelled in _cleanup_websocket_connection; reference _initialize_streaming_session, interim_subscriber_task, and _cleanup_websocket_connection when making this conditional assignment.
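The guarded-assignment pattern is straightforward to model with plain asyncio. This sketch (hypothetical `init_session`, not the backend's actual function) shows the original task staying tracked across a duplicate start:

```python
import asyncio

async def init_session(state):
    """Return a new background task, or None if a session is already active."""
    if state.get("active"):
        return None  # duplicate start: nothing new was created
    state["active"] = True
    return asyncio.create_task(asyncio.sleep(3600))

async def main():
    state = {}
    task = await init_session(state)      # first audio-start
    new_task = await init_session(state)  # duplicate audio-start -> None

    if new_task is not None:              # guard: don't clobber the live task
        task = new_task

    still_tracked = task is not None and not task.done()
    task.cancel()                         # cleanup can still find and cancel it
    return still_tracked

print(asyncio.run(main()))  # True
```

Without the guard, `task` would be `None` after the duplicate start and the sleep task would keep running with no reference left to cancel it.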
🧹 Nitpick comments (26)
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (3)
88-90: Remove extraneous `f` prefix; no placeholders in this string.

Static analysis (Ruff F541) correctly flags this. The same issue recurs on lines 1022, 1059, and 1071.

🧹 Proposed fix

```diff
-        logger.info(
-            f"🎤 Speaker recognition disabled, using original speaker labels"
-        )
+        logger.info(
+            "🎤 Speaker recognition disabled, using original speaker labels"
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py` around lines 88 - 90, The logged message uses an unnecessary f-string prefix (no placeholders) which triggers Ruff F541; remove the leading f from the string literal in the logger.info call(s) inside transcription_jobs.py (the calls that currently log "🎤 Speaker recognition disabled, using original speaker labels" and the other similar messages flagged) so they become plain string arguments to logger.info; locate the logger.info(...) invocations in the transcription job code and edit each message to drop the f prefix.
341-343: `logger.exception` already attaches the traceback; `{e}` in the message is redundant.

Ruff TRY401 flags this. Using `logger.exception("⚠️ Error triggering transcript plugins in batch mode")` (without `{e}`) is cleaner since the exception details are auto-appended.

🧹 Proposed fix

```diff
-            logger.exception(
-                f"⚠️ Error triggering transcript plugins in batch mode: {e}"
-            )
+            logger.exception(
+                "⚠️ Error triggering transcript plugins in batch mode"
+            )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py` around lines 341 - 343, The logger.exception call currently interpolates the exception object into the message (logger.exception(f"⚠️ Error triggering transcript plugins in batch mode: {e}")), which is redundant because logger.exception already appends the traceback; update the call to remove the {e} interpolation and use logger.exception("⚠️ Error triggering transcript plugins in batch mode") so the traceback is still recorded without duplicating the exception text (modify the logger.exception invocation where variable e is referenced).
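That `logger.exception` captures the traceback on its own is easy to verify with a throwaway in-memory handler (names here are illustrative):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(stream))

try:
    1 / 0
except ZeroDivisionError:
    # No need to interpolate the exception: logger.exception logs at ERROR
    # level and appends the full traceback to the message automatically.
    logger.exception("⚠️ Error triggering transcript plugins in batch mode")

output = stream.getvalue()
print("Traceback (most recent call last):" in output)  # True
print("ZeroDivisionError" in output)                   # True
```

Interpolating `{e}` on top of this duplicates information and, with TRY401, trips the linter for exactly that reason.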
1018-1022: Remove extraneous `f` prefixes from strings with no placeholders (lines 1022, 1059, 1071).

Same Ruff F541 issue as line 89.

🧹 Proposed fix for all three

```diff
-        logger.info(f"✅ Session ended without speech (grace period expired)")
+        logger.info("✅ Session ended without speech (grace period expired)")
```

```diff
-            logger.error(
-                f"❌ Session failed - transcription service unavailable"
-            )
+            logger.error(
+                "❌ Session failed - transcription service unavailable"
+            )
```

```diff
-            logger.error(
-                f"❌ Session failed - check transcription service configuration"
-            )
+            logger.error(
+                "❌ Session failed - check transcription service configuration"
+            )
```

Also applies to: 1058-1060, 1070-1072
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py` around lines 1018 - 1022, Several logger.info calls in transcription_jobs.py use f-strings with no placeholders (triggering Ruff F541); remove the unnecessary leading "f" so they are plain string literals. Locate the logger.info calls that reference session_closed_at and final_check_grace_period (the message "✅ Session ended without speech (grace period expired)") and the other similar messages later in the same function (the logger.info invocations near the checks using session_closed_at and any following "no speech" messages), and change them from f"... " to regular "..." strings.

backends/advanced/src/advanced_omi_backend/services/audio_stream/producer.py (1)
142-151: Silent `except Exception: pass` hides Redis/parse failures.

If `hget` fails (connection error, timeout) or `json.loads` raises, the fallback to default values is completely silent. Operators lose visibility into why the end signal was emitted with potentially wrong audio format parameters, which could affect worker flush behavior.

🔧 Proposed fix

```diff
-        except Exception:
-            pass  # Fall back to defaults
+        except Exception as exc:
+            logger.warning(
+                f"⚠️ Could not retrieve audio_format for {session_id}, "
+                f"falling back to defaults: {exc}"
+            )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/audio_stream/producer.py` around lines 142 - 151, The try/except around building session_key and parsing audio_format (the block that reads audio_format_raw via self.redis_client.hget, json.loads, and extracts sample_rate/channels/sample_width) currently swallows all exceptions; change it to catch Exception as e and log the failure (include session_id/session_key and the exception details) before falling back to defaults so Redis/parse errors are visible in logs; ensure the log call references the same symbols (session_id, session_key, audio_format_raw) and still allows the code to continue using the default sample_rate, channels, and sample_width.

backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py (2)
210-213: Redundant list entries after `.strip()`.

`v.strip().lower()` can never produce a string with trailing whitespace, so `"token "` and `"bearer "` in the check list are dead entries. Simplify to `["token", "bearer"]` to avoid confusion.

♻️ Suggested simplification

```diff
-            if k.lower() == "authorization" and (
-                not v
-                or v.strip().lower() in ["token", "token ", "bearer", "bearer "]
-            ):
+            if k.lower() == "authorization" and (
+                not v
+                or v.strip().lower() in ("token", "bearer")
+            ):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py` around lines 210 - 213, The conditional that checks an Authorization header uses redundant entries with trailing spaces because v.strip().lower() can never include trailing whitespace; update the check inside the Authorization branch (where k.lower() == "authorization" and v is tested) to replace the list ["token", "token ", "bearer", "bearer "] with the simplified ["token", "bearer"] so the condition reads something like checking v.strip().lower() in ["token", "bearer"].
614-616: `provider_name` parameter is accepted but never used.

Callers passing a `provider_name` may expect it to influence provider selection, but it's silently ignored. Consider removing it (breaking change), adding a deprecation warning, or prefixing with `_` to signal it's intentionally unused.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py` around lines 614 - 616, The parameter provider_name on get_transcription_provider is accepted but never used; to avoid confusion, rename it to _provider_name (to signal intentional unused) and add a runtime deprecation warning when a non-None value is passed: import warnings and call warnings.warn(...) with DeprecationWarning inside get_transcription_provider, explaining that provider_name is ignored and will be removed in a future release; keep existing logic unchanged and reference get_transcription_provider and the renamed _provider_name so callers see the intent.

backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py (3)
212-213: Unused `Conversation` import in two functions.

`from advanced_omi_backend.models.conversation import Conversation` is imported inside both `get_jobs_by_client` (line 213) and `fetch_client_jobs` inside `get_dashboard_data` (line 1195), but `Conversation` is never referenced in either function body. Safe to remove both.

Also applies to: 1191-1195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py` around lines 212 - 213, Remove the unused local import of Conversation in both functions: inside get_jobs_by_client and inside the nested fetch_client_jobs within get_dashboard_data; locate the lines that read "from advanced_omi_backend.models.conversation import Conversation" in those functions and delete them (leaving the existing get_queue import intact) so no unused Conversation import remains.
192-194: Exception chaining and `logging.exception` missing across `except` blocks.

Ruff flags multiple patterns throughout the file (B904, TRY400):

- `raise ... from e`: six `raise HTTPException(...)` calls inside `except` blocks (lines 192–194, 344–346, 486–488, 893–895, 1026–1028, 1416–1418) suppress the original exception's traceback. Add `from e` to all of them, e.g.:

```diff
-        raise HTTPException(
-            status_code=404, detail=f"Job not found or could not be canceled: {str(e)}"
-        )
+        raise HTTPException(
+            status_code=404, detail=f"Job not found or could not be canceled: {e!s}"
+        ) from e
```

- `logger.exception` vs `logger.error`: `logger.error(...)` inside `except` blocks (e.g., lines 191, 343, 446, 485, 604, 834, 892, 962) should be `logger.exception(...)` so the traceback is automatically captured without passing `exc_info=True`.

Also applies to: 344-346, 486-488, 893-895, 1026-1028, 1416-1418
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py` around lines 192 - 194, Update all except blocks in queue_routes.py that currently log with logger.error(...) and re-raise HTTPException(...) to instead call logger.exception(...) and re-raise the HTTPException using exception chaining (raise HTTPException(...) from e). Specifically, replace logger.error(...) calls in the same except handlers as the six re-raises and change each "raise HTTPException(...)" to "raise HTTPException(...) from e" so the original traceback is preserved; keep the caught exception variable name (e) consistent. This applies to the handlers that contain the HTTPException raises around the blocks shown (the six occurrences) and the corresponding logger calls in those except blocks.
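The effect of `from e` can be seen in a few lines. This sketch uses a hypothetical handler and a minimal stand-in exception class (not FastAPI's real `HTTPException`) to show the original error preserved as `__cause__`:

```python
class HTTPException(Exception):
    """Minimal stand-in for FastAPI's HTTPException."""
    def __init__(self, status_code, detail):
        super().__init__(detail)
        self.status_code = status_code

def cancel_job(jobs, job_id):
    try:
        return jobs[job_id]
    except KeyError as e:
        # `from e` chains the KeyError so tracebacks and tooling can see
        # the real underlying cause instead of a bare 404.
        raise HTTPException(
            404, f"Job not found or could not be canceled: {e!s}"
        ) from e

try:
    cancel_job({}, "abc123")
except HTTPException as exc:
    print(exc.status_code)               # 404
    print(type(exc.__cause__).__name__)  # KeyError
```

Without `from e`, Python still shows an implicit "During handling of the above exception" context, but explicit chaining marks the relationship as intentional and survives `raise ... from None` audits, which is what B904 checks for.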
1067-1163: Duplicate client-job-fetching logic between `get_jobs_by_client` and `fetch_client_jobs`.

`fetch_client_jobs` (lines 1191–1308) inside `get_dashboard_data` reimplements most of `get_jobs_by_client` (lines 197–346) with two silent behavioural differences:

- It only checks the `queued` / `started` / `finished` / `failed` registries, omitting the `canceled` / `deferred` / `scheduled` ones that `get_jobs_by_client` covers.
- It does not recurse over dependent jobs (`process_job_and_dependents`), so the dashboard view for an expanded client will be incomplete.

`get_dashboard_data` already calls other endpoint logic (e.g., `session_controller.get_streaming_status`). Consider extracting the shared lookup into a reusable helper, or calling the existing `get_jobs_by_client` logic directly, to keep both views consistent.

Also applies to: 1191-1308
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py` around lines 1067 - 1163, Duplicate job-fetching logic: fetch_client_jobs inside get_dashboard_data reimplements get_jobs_by_client but omits cancelled/deferred/scheduled registries and dependent-job recursion; refactor to reuse the existing logic. Replace the inline fetch_client_jobs implementation with a call to get_jobs_by_client (or extract the shared lookup into a helper used by both) so the dashboard uses the same registry set (queued/started/finished/failed/canceled/deferred/scheduled) and invokes process_job_and_dependents for each root job to include dependent jobs; ensure permission checks (current_user.is_superuser) and redis Job.fetch usage remain intact and propagate meta/fields the same way as get_jobs_by_client.

wizard.sh (1)
2-2: Same `source` failure guard needed as other scripts.

Same issue as `status.sh`: without `set -e` or `|| exit 1`, a failing `check_uv.sh` (via `return` or a missing file) won't abort the script.

🔧 Suggested fix

```diff
-source "$(dirname "$0")/scripts/check_uv.sh"
+source "$(dirname "$0")/scripts/check_uv.sh" || exit 1
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wizard.sh` at line 2, The script currently sources check_uv.sh without a failure guard, so a failing or missing check_uv.sh won't abort wizard.sh; update wizard.sh to guard the source call (or enable strict failure): either add a top-level set -e in wizard.sh or change the source invocation of scripts/check_uv.sh to include a failure check (e.g., source "$(dirname "$0")/scripts/check_uv.sh" || exit 1) so that a failing check_uv.sh causes wizard.sh to exit; locate the source line in wizard.sh and apply one of these guards.

wizard.py (2)
384-393: Unused `result` assignment; move success print to an `else` block (ruff F841, TRY300).

With `check=True`, `result` is only assigned when the subprocess succeeds, but it's never read. The success message is also more idiomatically placed in an `else` clause.

🔧 Proposed fix

```diff
-        result = subprocess.run(
+        subprocess.run(
             cmd,
             cwd=service["path"],
             check=True,
             timeout=300,  # 5 minute timeout for service setup
         )
-
-        console.print(f"✅ {service_name} setup completed")
-        return True
-
-    except FileNotFoundError as e:
+    except FileNotFoundError as e:
```

Then add the success message in an `else` block:

```diff
     except Exception as e:
         console.print(f"❌ {service_name} setup failed: {e}")
         return False
+    else:
+        console.print(f"✅ {service_name} setup completed")
+        return True
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wizard.py` around lines 384 - 393, The subprocess.run call assigns an unused variable result and the success message should be in a try/else pattern; update the try block that calls subprocess.run(cmd, cwd=service["path"], check=True, timeout=300) to not assign the unused result (or assign to _), and move console.print(f"✅ {service_name} setup completed") and the return True into an else: clause paired with that try so the success path executes only when no exception is raised (keep existing except handlers unchanged).
399-401: Multiple f-strings have no placeholders; remove the extraneous `f` prefix (ruff F541).

Ruff flags the following lines (among others) as f-strings without any interpolation:
- Lines 401, 406, 407 (in the `FileNotFoundError`/`TimeoutExpired` handlers)
CalledProcessErrorhandler)- Line 738 (
"\n🔊 [bold cyan]Streaming[/bold cyan]")- Lines 1183, 1207, 1210, 1212, 1215, 1218 (in
main())🔧 Example fixes (apply the same pattern to all flagged lines)
- console.print(f"[yellow] And that 'uv' is installed and on your PATH[/yellow]") + console.print("[yellow] And that 'uv' is installed and on your PATH[/yellow]") - console.print(f"[yellow] Configuration may be partially written.[/yellow]") + console.print("[yellow] Configuration may be partially written.[/yellow]") - console.print(f"[yellow] To retry just this service:[/yellow]") + console.print("[yellow] To retry just this service:[/yellow]") # … same pattern for lines 414, 415, 738, 1183, 1207, 1210, 1212, 1215, 1218🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wizard.py` around lines 399 - 401, Several print statements and exception handlers use f-strings without placeholders (ruff F541); locate the console.print calls and exception blocks that mention FileNotFoundError, TimeoutExpired, and CalledProcessError as well as the main() prints (e.g., the "\n🔊 [bold cyan]Streaming[/bold cyan]" literal) and remove the unnecessary leading "f" from those string literals so they are plain strings instead of f-strings; update every occurrence noted in the review (the console.print in the uv PATH message, the exception handlers around lines referencing FileNotFoundError/TimeoutExpired/CalledProcessError, and the main() output lines) to eliminate the extraneous f prefix.Makefile (1)
125-131: `pre-commit` may not be on `PATH` immediately after `uv tool install`.

`uv tool install pre-commit` places the binary in `uv`'s tool bin directory (typically `~/.local/bin` or `$UV_TOOL_BIN_DIR`). If that directory isn't on the user's `PATH`, the subsequent `@pre-commit install` lines will fail with "command not found". Using `uvx` (or `uv tool run`) sidesteps the PATH dependency entirely.

🔧 Suggested alternative

```diff
-	@uv tool install pre-commit
+	@uv tool install pre-commit --quiet
 	@echo ""
 	@echo "🔧 Installing git hooks..."
-	@pre-commit install --hook-type pre-push
-	@pre-commit install --hook-type pre-commit
+	@uv tool run pre-commit install --hook-type pre-push
+	@uv tool run pre-commit install --hook-type pre-commit
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` around lines 125 - 131, The Makefile runs `uv tool install pre-commit` then calls `pre-commit install` which can fail if the uv tool bin dir isn't on PATH; update the Makefile so you invoke the installed tool via uv's runner (e.g., `uvx` or `uv tool run`) instead of calling `pre-commit` directly. Replace the two `@pre-commit install --hook-type ...` invocations with calls that use `uvx`/`uv tool run` (or the uv-provided runner) to run `pre-commit install` so the binary is executed from uv's tool directory regardless of PATH. Ensure both hook-type installs (pre-push and pre-commit) are updated to use the same approach and keep the surrounding echo/verbosity behavior unchanged.

stop.sh (1)
2-2: Same `source` failure guard needed as other scripts
🔧 Suggested fix

```diff
-source "$(dirname "$0")/scripts/check_uv.sh"
+source "$(dirname "$0")/scripts/check_uv.sh" || exit 1
```
Verify each finding against the current code and only fix it if needed. In `@stop.sh` at line 2, The script uses the bare source invocation source "$(dirname "$0")/scripts/check_uv.sh" without a failure guard; update that line to guard against missing or failing include by either checking the file exists before sourcing or appending a fallback exit on failure (e.g., test -f "$(dirname "$0")/scripts/check_uv.sh" && source ... || { echo "Failed to load check_uv.sh"; exit 1; }), ensuring the sourced command string ("$(dirname \"$0\")/scripts/check_uv.sh") is the same symbol you modify.
restart.sh (1)
2-2: Same `source` failure guard needed as other scripts
🔧 Suggested fix

```diff
-source "$(dirname "$0")/scripts/check_uv.sh"
+source "$(dirname "$0")/scripts/check_uv.sh" || exit 1
```
Verify each finding against the current code and only fix it if needed. In `@restart.sh` at line 2, The source invocation for scripts/check_uv.sh in restart.sh should be guarded like the other scripts: first verify the file exists and is readable, then source it and if sourcing returns a non-zero status emit a clear error and exit non-zero; update the line that calls source "$(dirname "$0")/scripts/check_uv.sh" to perform these checks and fail-fast on missing/unreadable file or on a non-zero return from the sourced script.
start.sh (1)
2-2: Same `source` failure guard needed as other scripts
Without `set -e` or `|| exit 1`, a non-zero return from `check_uv.sh` silently falls through to `uv run`.
🔧 Suggested fix

```diff
-source "$(dirname "$0")/scripts/check_uv.sh"
+source "$(dirname "$0")/scripts/check_uv.sh" || exit 1
```
Verify each finding against the current code and only fix it if needed. In `@start.sh` at line 2, The sourced check script can return non-zero but currently allows execution to continue to uv run; make the start.sh invocation of check_uv.sh fail-fast so a non-zero exit aborts start.sh (either add a top-level set -e in start.sh or change the source line that calls check_uv.sh to enforce exit on failure, e.g. ensure the result of source "$(dirname "$0")/scripts/check_uv.sh" triggers an immediate exit when non-zero) and keep the uv run invocation guarded so it never runs if check_uv.sh fails.
backends/advanced/.env.template (1)
65-70: Swap `GALILEO_PROJECT` and `GALILEO_LOG_STREAM` to restore alphabetical key ordering.
`dotenv-linter` flags `GALILEO_LOG_STREAM` (line 68) as appearing after `GALILEO_PROJECT` (line 67), violating the section's key order convention.
🔧 Proposed fix

```diff
 # Galileo (OTEL-based LLM observability)
 GALILEO_API_KEY=
-GALILEO_PROJECT=chronicle
 GALILEO_LOG_STREAM=default
+GALILEO_PROJECT=chronicle
 # GALILEO_CONSOLE_URL=https://app.galileo.ai # Default; override for self-hosted
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/.env.template` around lines 65 - 70, Swap the two environment keys so they are alphabetically ordered: place GALILEO_LOG_STREAM before GALILEO_PROJECT in the Galileo (OTEL-based LLM observability) block; update the sequence involving GALILEO_API_KEY, GALILEO_LOG_STREAM, GALILEO_PROJECT (and keep GALILEO_CONSOLE_URL commented as-is) to satisfy dotenv-linter key ordering rules.
backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py (1)
628-645: `send_button_event` has no `audio_started` guard.
`send_chunks_from_file` checks `session.audio_started` before proceeding, but `send_button_event` has no equivalent guard. If the button event is sent before `audio-start` is acknowledged (e.g., immediately after `start_stream` with a very tight race), the server may not have a session context yet.
Consider adding:

```python
if not session.audio_started:
    raise RuntimeError("Stream not started; cannot send button event")
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py` around lines 628 - 645, send_button_event must guard against sending before audio is acknowledged: inside send_button_event, after retrieving session via self._sessions.get(stream_id) and before scheduling the coroutine, check session.audio_started (the same guard used in send_chunks_from_file) and raise a RuntimeError like "Stream not started; cannot send button event" if false; this prevents races when start_stream returns before the server has established session context and mirrors the protection already implemented in send_chunks_from_file.
backends/advanced/src/advanced_omi_backend/prompt_registry.py (1)
97-100: Silent `except: pass` masks real LangFuse errors during seeding.
If `client.get_prompt()` fails due to a network timeout or auth error (not just "not found"), the code silently falls through and attempts `create_prompt`, which will then also fail, losing the root cause. Log the swallowed exception at `DEBUG` level so it's diagnosable without polluting startup logs.
🔧 Proposed fix

```diff
-try:
-    existing = client.get_prompt(prompt_id)
-except Exception:
-    pass  # Prompt doesn't exist yet
+try:
+    existing = client.get_prompt(prompt_id)
+except Exception as fetch_exc:
+    logger.debug(
+        f"Prompt '{prompt_id}' not found in LangFuse (will create): {fetch_exc}"
+    )
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/prompt_registry.py` around lines 97 - 100, The try/except around client.get_prompt(prompt_id) currently swallows all exceptions; change the except to capture the exception (except Exception as e) and log it at DEBUG with context (e.g., include prompt_id and the exception) before continuing so real LangFuse/network/auth errors are diagnosable; keep the control flow that falls through to client.create_prompt if appropriate, but ensure the logged message uses the module's logger (or existing logger variable) and includes exc_info/stack trace for troubleshooting.
backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py (2)
1534-1534: Unpack `_` for unused `payload` in PCM control mode
Ruff RUF059: `payload` is unpacked from `parse_wyoming_protocol` but never used in the control-mode branch. Rename it to signal intentional discard:

```diff
-header, payload = await parse_wyoming_protocol(ws)
+header, _ = await parse_wyoming_protocol(ws)
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py` at line 1534, The unpacked value named payload from parse_wyoming_protocol is unused in the PCM control-mode branch; rename it to _ (or _payload) to signal intentional discard—update the unpacking line where header, payload = await parse_wyoming_protocol(ws) in the websocket_controller's PCM control-mode handling to header, _ = await parse_wyoming_protocol(ws) (or header, _payload) so linters like Ruff RUF059 no longer flag an unused variable.
547-549: Unused `user_id`/`user_email` parameters (Ruff ARG001)

- `_finalize_streaming_session` (line 548): both `user_id` and `user_email` are accepted but never referenced in the body.
- `_process_rolling_batch` (line 1077): `user_email` unused.
- `_process_batch_audio_complete` (line 1176): `user_email` unused.

Remove the unused parameters from each signature and all call sites to keep the API surface clean.
Also applies to: 1077-1078, 1176-1177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py` around lines 547 - 549, Remove the unused user_id and user_email parameters from the function signatures of _finalize_streaming_session and remove user_email from _process_rolling_batch and _process_batch_audio_complete; then update every call site to stop passing those arguments (search for invocations of _finalize_streaming_session, _process_rolling_batch, and _process_batch_audio_complete and remove the corresponding arguments), and adjust any type hints/docstrings accordingly so the functions and their callers remain consistent.
backends/advanced/src/advanced_omi_backend/workers/rq_worker_entry.py (1)
26-31: Silent `except Exception: pass` obscures OTEL init failures (Ruff S110)
`init_otel` already guards its own `ImportError` and generic exceptions internally. The only errors that escape to this outer `except` are unexpected module-load failures. Swallowing them with `pass` makes diagnosis impossible. A `logger.debug` with `exc_info=True` costs nothing at INFO log level and keeps workers unblocked:
♻️ Proposed fix

```diff
 try:
     from advanced_omi_backend.observability.otel_setup import init_otel

     init_otel()
 except Exception:
-    pass  # Optional — don't block workers
+    logger.debug("OTEL initialization skipped", exc_info=True)
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/workers/rq_worker_entry.py` around lines 26 - 31, The outer try/except around init_otel currently swallows all exceptions with "except Exception: pass"; change it to log the unexpected exception instead of silencing it: import or use the module logger and in the except block call logger.debug("init_otel failed (non-fatal)", exc_info=True) (or an existing logger variable) so unexpected module-load failures from init_otel are recorded while keeping workers unblocked; reference the init_otel import/initialization and replace the bare except in that block.
backends/advanced/src/advanced_omi_backend/observability/otel_setup.py (3)
10-13: `lru_cache` makes `is_galileo_enabled` permanently sticky.
The cached result never invalidates, so if the env var is set after first invocation (e.g., in tests), the check returns stale data. Acceptable for production, but consider adding a note or a `_reset` helper for test harnesses.
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/observability/otel_setup.py` around lines 10 - 13, The is_galileo_enabled function is decorated with `@lru_cache`(maxsize=1) which makes its result permanent and causes stale behavior if GALILEO_API_KEY is set after the first call (common in tests); remove the `@lru_cache` decorator or replace it with a controlled caching pattern and add a companion reset helper (e.g., is_galileo_enabled_reset or clear_is_galileo_cache) so tests can invalidate the cached value, and update the function docstring to note caching behavior if you keep caching.
104-105: Use `logger.exception` to preserve the traceback.
`logger.error(f"Failed to initialize OTEL: {e}")` loses the stack trace, making debugging harder. Since this is a catch-all `except Exception`, the traceback is especially valuable.
Proposed fix

```diff
 except Exception as e:
-    logger.error(f"Failed to initialize OTEL: {e}")
+    logger.exception(f"Failed to initialize OTEL: {e}")
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/observability/otel_setup.py` around lines 104 - 105, In the except Exception block in otel_setup.py (the handler that currently does "except Exception as e" and calls logger.error), replace the logger.error call with logger.exception so the full traceback is preserved; locate the except block in the OTEL initialization function (where logger is used) and change the logging call to use logger.exception("Failed to initialize OTEL") while keeping the exception variable to ensure the stack trace is emitted.
16-43: The `_session_token` global is not safe under concurrent async tasks.
If multiple async tasks (or threads) call `set_galileo_session` concurrently, the global `_session_token` is overwritten and `clear_galileo_session` can only reset the last one. Since the current usage is in RQ workers (sequential per process), this isn't an active bug, but it will bite if this is ever called from concurrent FastAPI request handlers or async tasks.
Consider using a `contextvars.ContextVar` to store the token, or document the single-task assumption.
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/observability/otel_setup.py` around lines 16 - 43, The global _session_token used by set_galileo_session and clear_galileo_session is unsafe for concurrent async tasks because multiple calls can overwrite the token; replace the module-level global with a context-local storage (use contextvars.ContextVar) to store the session token per execution context, update set_galileo_session to set the ContextVar and clear_galileo_session to reset it via the token returned by ContextVar.set (or use ContextVar.get/clear semantics), and keep the ImportError guards and is_galileo_enabled() check intact so behavior is unchanged for single-task RQ workers.
backends/advanced/src/advanced_omi_backend/plugins/router.py (1)
141-145: Mutable default in the `ConditionResult` NamedTuple.
`extra: Dict[str, Any] = {}` uses a shared mutable default. Current usage never mutates it in place, so this is safe today, but it's a latent footgun. Not blocking, just noting the risk.
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/plugins/router.py` around lines 141 - 145, ConditionResult's field extra uses a shared mutable default ({}) which is a latent bug; change the type to Optional[Dict[str, Any]] with a default of None (e.g., extra: Optional[Dict[str, Any]] = None) and update any consumers of ConditionResult (places that reference ConditionResult.extra) to treat None as an empty dict (e.g., extra or {} or copy if mutation is needed). This avoids a shared mutable default while keeping existing semantics.
```python
async def send_button_event(self, button_state: str = "SINGLE_PRESS") -> None:
    """Send button event via Wyoming protocol.

    Args:
        button_state: Button state ("SINGLE_PRESS" or "DOUBLE_PRESS")
    """
    if not self.ws:
        raise RuntimeError("Not connected. Call connect() first.")

    header = {"type": "button-event", "data": {"state": button_state}}
    await self.ws.send(json.dumps(header) + "\n")
    logger.info(f"Sent button event: {button_state}")
```
🧩 Analysis chain

```shell
# Search for button-event handler and all button state references
rg -n "button.event|button_state|SINGLE_PRESS|SINGLE_TAP|DOUBLE_PRESS|DOUBLE_TAP" --type py -C2

# Check the file structure to understand the layout
git ls-files | grep -E "(backend|audio|stream)" | head -20

# Examine the StreamManager.send_button_event to see how it calls AudioStreamClient
rg -n "class StreamManager" -A 50 backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py | grep -A 5 "send_button_event"

# Check the test library implementation
cat -n tests/libs/audio_stream_library.py | sed -n '145,160p'

# Verify ButtonState enum and its usage in the server handler
cat -n backends/advanced/src/advanced_omi_backend/plugins/events.py | sed -n '42,54p'

# Examine the server handler to see exact error handling
cat -n backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py | sed -n '1010,1060p'
```
AudioStreamClient.send_button_event uses wrong button state strings — docstring and default don't match server expectations.
AudioStreamClient.send_button_event (line 233) defaults to "SINGLE_PRESS" and documents "SINGLE_PRESS" / "DOUBLE_PRESS", but the server's button-event handler (websocket_controller.py line 1023) explicitly expects "SINGLE_TAP" / "DOUBLE_TAP". The server maps button states via ButtonState enum (plugins/events.py line 50), which only defines SINGLE_TAP and DOUBLE_TAP. If an unrecognized state like "SINGLE_PRESS" is received, ButtonState(button_state) raises ValueError (line 1050), the handler logs a warning and returns early without dispatching to the plugin system (lines 1051–1053).
While current tests pass "SINGLE_TAP" via StreamManager, the inconsistency risks silent failures: direct calls to AudioStreamClient.send_button_event() with the wrong strings will silently suppress plugin events without raising an error.
Change AudioStreamClient.send_button_event to use "SINGLE_TAP" / "DOUBLE_TAP" everywhere—default parameter (line 233), docstring (line 237)—to match the ButtonState enum and server handler.
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 240-240: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py`
around lines 233 - 244, The send_button_event method in AudioStreamClient uses
the wrong button state strings; update AudioStreamClient.send_button_event so
its default parameter, docstring, and any emitted payload use
"SINGLE_TAP"/"DOUBLE_TAP" (not "SINGLE_PRESS"/"DOUBLE_PRESS") to match the
server-side ButtonState enum and websocket_controller.py handler expectations;
ensure the header data["state"] value and the function signature/docstring are
changed consistently so the server will accept and dispatch the event.
backends/advanced/src/advanced_omi_backend/observability/otel_setup.py (outdated review comment, resolved)
```python
elif condition_type == "keyword_anywhere":
    # Trigger when keyword appears anywhere in the transcript.
    # Command is the transcript with the keyword removed.
    transcript = data.get("transcript", "")
    normalized_transcript = normalize_text_for_wake_word(transcript)

    keywords = plugin.condition.get("keywords", [])
    if not keywords:
        keyword = plugin.condition.get("keyword", "")
        if keyword:
            keywords = [keyword]

    for keyword in keywords:
        normalized_keyword = normalize_text_for_wake_word(keyword)
        if normalized_keyword and normalized_keyword in normalized_transcript:
            command = extract_command_around_keyword(transcript, keyword)
            logger.debug(
                f"Keyword '{keyword}' found in transcript. "
                f"Original: '{transcript}', Command: '{command}'"
            )
            return ConditionResult(
                execute=True,
                extra={"command": command, "original_transcript": transcript},
            )

    return self._SKIP
```
keyword_anywhere: substring match can trigger on partial words.
Line 419 uses `normalized_keyword in normalized_transcript`, a plain substring check on normalized (lowercased, punctuation-stripped) text. A keyword like "ok" would match "booking", "cook", etc. Use word-boundary matching instead to prevent false positives.
Proposed fix: use word-boundary regex

```diff
 for keyword in keywords:
     normalized_keyword = normalize_text_for_wake_word(keyword)
-    if normalized_keyword and normalized_keyword in normalized_transcript:
+    if normalized_keyword and re.search(
+        r"\b" + re.escape(normalized_keyword) + r"\b",
+        normalized_transcript,
+    ):
         command = extract_command_around_keyword(transcript, keyword)
```

Note: the same concern applies to `wake_word` matching on line 390 (`startswith`), but the prefix constraint makes false positives far less likely in practice.
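To make the false positive concrete, a tiny standalone check (the helper name is illustrative, not the router's API):

```python
import re


def keyword_in_text(keyword: str, text: str) -> bool:
    # Whole-word match; a plain `in` check also matches inside longer words.
    return re.search(r"\b" + re.escape(keyword) + r"\b", text) is not None
```

A plain substring test finds "ok" inside "booking", while the boundary-anchored search only fires on a standalone "ok".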
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/advanced/src/advanced_omi_backend/plugins/router.py` around lines
405 - 430, The substring check in the "keyword_anywhere" branch (where
normalized_keyword in normalized_transcript) can match inside other words;
change it to a word-boundary regex match so only whole-word occurrences trigger;
i.e., use re.search with re.escape(normalized_keyword) wrapped in \b boundaries
against normalized_transcript, add an import for the re module if missing, and
keep the rest of the flow (call extract_command_around_keyword(transcript,
keyword), logging via logger, and returning ConditionResult or self._SKIP)
unchanged.
```diff
 # Gate: only set the flag when open_conversation_job is running and will read it.
 # The conversation:current key is set right before the polling loop starts.
 conversation_id = await self._async_redis.get(
     f"conversation:current:{session_id}"
 )
 if not conversation_id:
     logger.warning(
         f"No open conversation for session {session_id} — close request ignored"
     )
     return False

 from advanced_omi_backend.controllers.session_controller import (
     request_conversation_close,
 )

-return await request_conversation_close(self._async_redis, session_id, reason=reason.value)
+return await request_conversation_close(
+    self._async_redis, session_id, reason=reason.value
+)
```
Minor TOCTOU: conversation may close between the gate check and request_conversation_close.
The `conversation:current:{session_id}` key is checked at line 60, and then `request_conversation_close` sets a Redis flag at line 73. If the conversation closes (key deleted) in that window, an orphan close-flag is written. If `request_conversation_close` doesn't clean up that flag on its own path, it could be read by the next conversation on the same session.
This is a very low-frequency race and the gate still provides the intended protection for the common case; just ensure any orphan flags are either TTL-expired or cleared on conversation start.
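One way to narrow the window is to re-check the conversation key inside the writer itself. A sketch with a dict standing in for Redis (the real code would use the async client and set the flag with a TTL so any orphan expires); the `expected_conversation_id` parameter is an assumption, not the current signature:

```python
class FakeRedis:
    """In-memory stand-in for the async Redis client, for illustration only."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value


def request_conversation_close(r, session_id, expected_conversation_id, reason):
    # Re-verify inside the writer: if the conversation changed or closed
    # after the caller's gate check, skip writing the close-flag.
    current = r.get(f"conversation:current:{session_id}")
    if current != expected_conversation_id:
        return False
    r.set(f"conversation:close_requested:{session_id}", reason)
    return True
```

This narrows the race rather than eliminating it; a TTL on the flag (or clearing it on conversation start) remains the backstop.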
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/advanced/src/advanced_omi_backend/plugins/services.py` around lines
58 - 75, You have a TOCTOU where conversation:current:{session_id} is read in
this function but request_conversation_close later sets a close-flag and the
conversation may have changed in between; modify the flow so
request_conversation_close only writes the flag when it still targets the same
conversation (e.g., accept and verify an expected conversation_id or perform a
Redis WATCH/MULTI check inside request_conversation_close) or ensure the flag is
short-lived (SET with an expiry) and/or explicitly cleared when a new
conversation starts (in the open_conversation_job or conversation start
routine); update request_conversation_close and conversation start logic
accordingly to avoid orphan flags.
```diff
-async def get_queue_stats_endpoint(
-    current_user: User = Depends(current_active_user)
-):
+async def get_queue_stats_endpoint(current_user: User = Depends(current_active_user)):
```
Missing superuser authorization check on sensitive endpoints
`current_user` is injected via `Depends` but never used in the bodies of `get_queue_stats_endpoint`, `get_queue_worker_details`, `get_stream_stats`, and `get_redis_sessions`. Any authenticated (non-admin) user can query internal queue metrics, worker/Redis connection details, all Redis stream contents with consumer group info, and session data for all users. This is flagged by Ruff (ARG001) on all four lines.
Each of these endpoints leaks operational internals or cross-user data. At minimum `get_redis_sessions` (which exposes `user_id`, `client_id`, stream state, etc. for every session) should be admin-only. Review whether the others also need a superuser gate:
```diff
 async def get_queue_stats_endpoint(current_user: User = Depends(current_active_user)):
     """Get queue statistics."""
+    if not current_user.is_superuser:
+        raise HTTPException(status_code=403, detail="Admin access required")
     try:
```

Apply the same pattern to `get_queue_worker_details` (line 459), `get_stream_stats` (line 494), and `get_redis_sessions` (line 901).
Also applies to: 459-459, 494-494, 901-901
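The guard can also live in one reusable helper instead of being repeated per endpoint; sketched here with a plain `PermissionError` standing in for FastAPI's `HTTPException(status_code=403)`:

```python
def require_superuser(user):
    # In the real routers this would raise fastapi.HTTPException(403) instead.
    if not getattr(user, "is_superuser", False):
        raise PermissionError("Admin access required")
    return user
```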
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 439-439: Unused function argument: current_user
(ARG001)
[warning] 439-439: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py`
at line 439, These endpoints expose sensitive operational and cross-user data
without enforcing admin-only access; update get_queue_stats_endpoint,
get_queue_worker_details, get_stream_stats, and get_redis_sessions to enforce a
superuser check by using the injected current_user (from current_active_user)
and returning an HTTP 403 (raise HTTPException) if current_user.is_superuser is
falsy; add the same guard at the top of each function (use the existing
current_user param and consistent error message) so only admins can call these
handlers.
```python
# Progress callback: writes batch progress to RQ job.meta so the
# queue API and UI can show "Transcribing segment X of Y".
def _on_batch_progress(event: dict) -> None:
    job = get_current_job()
    if job:
        current = event.get("current", 0)
        total = event.get("total", 0)
        job.meta["batch_progress"] = {
            "current": current,
            "total": total,
            "percent": int(current / total * 100) if total else 0,
            "message": f"Transcribing segment {current} of {total}",
        }
        job.save_meta()

# Transcribe the audio directly from memory (no disk I/O needed)
transcribe_kwargs: Dict[str, Any] = {
    "audio_data": wav_data,
    "sample_rate": actual_sample_rate,
    "diarize": True,
    "progress_callback": _on_batch_progress,
```
🧩 Analysis chain

```shell
rg -nP 'def transcribe\b' --type=py -A 10 -g '!**/test*'

cd backends/advanced/src/advanced_omi_backend/workers && cat -n transcription_jobs.py | sed -n '240,280p'

grep -n "provider =" backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py | head -20

rg -n "def get_transcription_provider" --type=py -A 15

rg -n "class.*TranscriptionProvider" --type=py -B 2 backends/advanced/src/advanced_omi_backend/services/transcription/

sed -n '614,680p' backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py

sed -n '117,250p' backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py

sed -n '158,250p' backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py | grep -n "progress_callback"

sed -n '36,80p' backends/advanced/src/advanced_omi_backend/services/transcription/mock_provider.py
```
`MockTranscriptionProvider` must accept a `progress_callback` parameter to avoid a `TypeError`.
When `model_provider == "mock"`, the `RegistryBatchTranscriptionProvider` instantiates `MockTranscriptionProvider` and calls its `transcribe()` method. However, `MockTranscriptionProvider.transcribe()` accepts only `audio_data`, `sample_rate`, and `diarize` parameters; it does not accept `progress_callback` or use `**kwargs`. Passing `progress_callback` through `**transcribe_kwargs` will raise `TypeError: transcribe() got an unexpected keyword argument 'progress_callback'`.
Update MockTranscriptionProvider.transcribe() to accept progress_callback=None and **kwargs:
```python
async def transcribe(self, audio_data: bytes, sample_rate: int, diarize: bool = False, progress_callback=None, **kwargs) -> dict:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py`
around lines 253 - 273, MockTranscriptionProvider.transcribe currently lacks a
progress_callback parameter so passing transcribe_kwargs (which contains
progress_callback) from RegistryBatchTranscriptionProvider will raise a
TypeError; update MockTranscriptionProvider.transcribe to accept
progress_callback=None and **kwargs (signature like async def transcribe(self,
audio_data, sample_rate, diarize=False, progress_callback=None, **kwargs)) so it
can receive and ignore the extra argument and remain compatible with
transcribe_kwargs usage in RegistryBatchTranscriptionProvider when
model_provider == "mock".
```python
    session_id: str,
    user_id: str,
    client_id: str,
    timeout_seconds: int = None,
    *,
    redis_client=None,
) -> Dict[str, Any]:
```
`timeout_seconds: int = None` violates PEP 484; use an explicit `Optional[int]`.
PEP 484 prohibits implicit Optional. Since typing is already imported and the parameter defaults to None, annotate it properly.
🧹 Proposed fix

```diff
 async def transcription_fallback_check_job(
     session_id: str,
     user_id: str,
     client_id: str,
-    timeout_seconds: int = None,
+    timeout_seconds: int | None = None,
     *,
     redis_client=None,
 ) -> Dict[str, Any]:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    session_id: str,
    user_id: str,
    client_id: str,
    timeout_seconds: int | None = None,
    *,
    redis_client=None,
) -> Dict[str, Any]:
```
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 677-677: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py`
around lines 674 - 680, Change the implicit Optional annotation on the
timeout_seconds parameter to an explicit one: replace "timeout_seconds: int =
None" with "timeout_seconds: Optional[int] = None" (or "typing.Optional[int]" if
you prefer), and ensure Optional is imported from typing; update the same
function signature that includes session_id, user_id, client_id,
timeout_seconds, and redis_client in transcription_jobs.py.
```diff
@@ -1,2 +1,3 @@
 #!/bin/bash
+source "$(dirname "$0")/scripts/check_uv.sh"
```
🧩 Analysis chain

```shell
#!/bin/bash
# Find check_uv.sh and examine its content
fd -t f "check_uv.sh"

# Find status.sh and examine its content
fd -t f "status.sh"

# Read status.sh to see current content
cat -n status.sh

# List contents of scripts directory
ls -la scripts/

# Search for check_uv.sh in the repository
find . -name "*check_uv*" -o -name "*uv*" | head -20

# Search for any references to check_uv function or script
rg "check_uv" --type sh

# Check if there are any git history or references to check_uv.sh
git log --all --pretty=format: --name-only --diff-filter=D | grep -i "check_uv" || echo "Not found in deleted files"

# Check if set -e is used in status.sh
grep -n "set -e" status.sh || echo "set -e not found"
```
Add `|| exit 1` to guard against missing `check_uv.sh`

The file `scripts/check_uv.sh` does not exist in the repository, causing `source` to fail silently on line 2. Without `set -e` or an explicit error handler, the script continues and executes `uv run` even after the `source` command fails.

🔧 Suggested fix

```diff
-source "$(dirname "$0")/scripts/check_uv.sh"
+source "$(dirname "$0")/scripts/check_uv.sh" || exit 1
```

This same issue affects `start.sh`, `restart.sh`, `stop.sh`, and `wizard.sh`, which all source the same missing file.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```bash
source "$(dirname "$0")/scripts/check_uv.sh" || exit 1
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@status.sh` at line 2, The source command for scripts/check_uv.sh can fail
silently; update the source invocation in status.sh (the line sourcing
"scripts/check_uv.sh") to append "|| exit 1" so the script aborts if the file is
missing, and apply the same change to start.sh, restart.sh, stop.sh and
wizard.sh to guard against executing uv run when check_uv.sh is absent; ensure
you modify the exact source lines that reference "scripts/check_uv.sh" rather
than other lines.
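As a rough Python analogue of what a `check_uv.sh` preflight script is expected to do (an assumption — the script itself is missing from the repo), the check can probe `PATH` with `shutil.which` and let the caller fail fast when the tool is absent:

```python
import shutil


def require_tool(name: str) -> bool:
    """Return True if `name` is on PATH; the shell preflight would exit non-zero otherwise."""
    return shutil.which(name) is not None


# A caller would typically abort on False, e.g.:
#     if not require_tool("uv"):
#         sys.exit("uv is not installed; install it before running this script")
uv_available = require_tool("uv")  # True or False depending on the environment
```

This mirrors the `|| exit 1` guard proposed above: the check itself must propagate failure, or the calling script happily continues without its prerequisite.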
```yaml
test_button_actions:
  enabled: true
  events:
    - button.single_press
    - button.double_press
  condition:
    type: always
  actions:
    single_press:
      type: close_conversation
```
`button.double_press` event subscribed but no `double_press` action defined.
The plugin subscribes to `button.double_press`, but `actions:` only contains `single_press`. Depending on how the plugin handles a missing action key, a `double_press` event during tests could raise a `KeyError` or produce a silent no-op. Either add a `double_press` action or remove the event subscription if it isn't needed for the current test suite.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/config/plugins.test.yml` around lines 16 - 25, The test config for
test_button_actions subscribes to events including button.double_press but only
defines an actions mapping for single_press; either add a double_press action
entry under actions (e.g., define double_press: type: close_conversation or
whatever behavior is expected) or remove "button.double_press" from the events
list so the events array matches the actions map (refer to the
test_button_actions block, events -> button.double_press, and actions ->
single_press/double_press).
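One defensive pattern for the mismatch above — a hypothetical sketch, not the plugin's actual dispatcher — is to look up the action with `dict.get`, so an event with no configured action becomes an explicit no-op instead of a `KeyError`:

```python
def dispatch_button_event(event: str, actions: dict) -> str:
    """Map an event like 'button.double_press' to its configured action, if any."""
    key = event.split(".", 1)[-1]  # "button.double_press" -> "double_press"
    action = actions.get(key)      # .get avoids KeyError for unconfigured events
    if action is None:
        return f"no action configured for {key}"
    return f"running {action['type']}"
```

With the test config above, `button.single_press` resolves to `close_conversation`, while `button.double_press` falls through to the no-op branch.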
```diff
     except subprocess.TimeoutExpired as e:
         console.print(f"❌ {service_name} setup timed out after {e.timeout}s")
         console.print(f"[yellow] Configuration may be partially written.[/yellow]")
         console.print(f"[yellow] To retry just this service:[/yellow]")
-        console.print(f"[yellow] cd {service['path']} && {' '.join(service['cmd'])}[/yellow]")
+        console.print(
+            f"[yellow] cd {service['path']} && {' '.join(service['cmd'])}[/yellow]"
+        )
         return False
     except subprocess.CalledProcessError as e:
         console.print(f"❌ {service_name} setup failed with exit code {e.returncode}")
         console.print(f"[yellow] Check the error output above for details.[/yellow]")
         console.print(f"[yellow] To retry just this service:[/yellow]")
-        console.print(f"[yellow] cd {service['path']} && {' '.join(service['cmd'])}[/yellow]")
+        console.print(
+            f"[yellow] cd {service['path']} && {' '.join(service['cmd'])}[/yellow]"
+        )
```
Retry hint shows the base command, not the fully-assembled cmd with all extra flags
service['cmd'] is the bare entry from the SERVICES dict. By the time execution reaches the except blocks, cmd already has all context-sensitive arguments appended (e.g., --transcription-provider, --hf-token, --speaker-service-url, --deepgram-api-key, --enable-https, etc.). Showing service['cmd'] in the retry hint will mislead the user into running an incomplete invocation.
🐛 Proposed fix (applies to both TimeoutExpired and CalledProcessError handlers)

```diff
     except subprocess.TimeoutExpired as e:
         console.print(f"❌ {service_name} setup timed out after {e.timeout}s")
         console.print(f"[yellow] Configuration may be partially written.[/yellow]")
         console.print(f"[yellow] To retry just this service:[/yellow]")
         console.print(
-            f"[yellow] cd {service['path']} && {' '.join(service['cmd'])}[/yellow]"
+            f"[yellow] cd {service['path']} && {' '.join(cmd)}[/yellow]"
         )
         return False
     except subprocess.CalledProcessError as e:
         console.print(f"❌ {service_name} setup failed with exit code {e.returncode}")
         console.print(f"[yellow] Check the error output above for details.[/yellow]")
         console.print(f"[yellow] To retry just this service:[/yellow]")
         console.print(
-            f"[yellow] cd {service['path']} && {' '.join(service['cmd'])}[/yellow]"
+            f"[yellow] cd {service['path']} && {' '.join(cmd)}[/yellow]"
         )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    except subprocess.TimeoutExpired as e:
        console.print(f"❌ {service_name} setup timed out after {e.timeout}s")
        console.print(f"[yellow] Configuration may be partially written.[/yellow]")
        console.print(f"[yellow] To retry just this service:[/yellow]")
        console.print(
            f"[yellow] cd {service['path']} && {' '.join(cmd)}[/yellow]"
        )
        return False
    except subprocess.CalledProcessError as e:
        console.print(f"❌ {service_name} setup failed with exit code {e.returncode}")
        console.print(f"[yellow] Check the error output above for details.[/yellow]")
        console.print(f"[yellow] To retry just this service:[/yellow]")
        console.print(
            f"[yellow] cd {service['path']} && {' '.join(cmd)}[/yellow]"
        )
```
🧰 Tools
🪛 Ruff (0.15.1)
[error] 406-406: f-string without any placeholders
Remove extraneous f prefix
(F541)
[error] 407-407: f-string without any placeholders
Remove extraneous f prefix
(F541)
[error] 414-414: f-string without any placeholders
Remove extraneous f prefix
(F541)
[error] 415-415: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@wizard.py` around lines 404 - 418, The retry hint incorrectly prints the base
template service['cmd'] instead of the actual assembled command used at runtime;
update both exception handlers (the subprocess.TimeoutExpired and
subprocess.CalledProcessError blocks) to print the runtime command variable (the
assembled cmd list/string used to invoke subprocess) rather than service['cmd'],
and format/join that assembled cmd the same way it is currently done (e.g., ' '.join(cmd), using whatever variable name is bound where the process was started) so the
retry hint shows the full invocation with all appended flags.
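The distinction the review draws — the base template versus the fully-assembled command — can be sketched as follows (the names below are illustrative, not wizard.py's actual variables). Using `shlex.join` instead of a plain `' '.join` also quotes arguments safely, so the retry hint stays copy-pasteable even when a flag value contains spaces:

```python
import shlex


def build_cmd(base: list[str], extra_flags: dict[str, str]) -> list[str]:
    """Append context-sensitive flags to a copy of the base command."""
    cmd = list(base)
    for flag, value in extra_flags.items():
        cmd += [flag, value]
    return cmd


base = ["uv", "run", "setup.py"]  # what a SERVICES-style entry would hold
cmd = build_cmd(base, {"--transcription-provider": "deepgram"})
retry_hint = shlex.join(cmd)      # show the assembled cmd, not the base template
```

Printing `base` here would reproduce the bug the review describes: the user retries an invocation that is missing every appended flag.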
|
| Metric | Count |
|---|---|
| ✅ Passed | 105 |
| ❌ Failed | 16 |
| 📊 Total | 121 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
- Changed button state terminology from `SINGLE_TAP` and `DOUBLE_TAP` to `SINGLE_PRESS` and `DOUBLE_PRESS` across various files, including documentation and code implementations.
- Enhanced the `send_button_event` method to reflect the updated button state values, ensuring consistency in event handling.
- Introduced new methods for managing button events in the plugin architecture, improving the overall interaction with device buttons.
- Updated tests to align with the new button state definitions, ensuring robust coverage for the updated functionality.
- Introduced a new test file `test_qwen3_asr_parsing.py` to validate the functionality of the `_parse_qwen3_output` and `detect_and_fix_repetitions` methods.
- Implemented various test cases covering standard and edge cases for ASR output parsing, including language detection, handling of empty inputs, and unexpected text.
- Added tests for repetition detection to ensure proper functionality based on specified thresholds.
- Enhanced the `Makefile` to include a new target for running specific tests by name, tag, or file, improving test execution flexibility.
- Created a shared prerequisite check script `check_uv.sh` to ensure the `uv` package manager is installed before running scripts, enhancing setup reliability.
- Introduced a new test file `test_qwen3_asr_parsing.py` to validate the functionality of the `_parse_qwen3_output` and `detect_and_fix_repetitions` methods.
- Implemented various test cases covering standard and edge cases for ASR output parsing, including language detection, handling of empty inputs, and unexpected text.
- Added tests for repetition detection to ensure proper functionality based on specified thresholds.
- Updated session retrieval logic in `queue_routes.py` to ensure proper closure of Redis connections using `await redis_client.aclose()`, improving resource management.
- Enhanced error handling during session data retrieval, providing clearer logging for issues encountered while fetching session information.
- Streamlined the session key scanning process, maintaining existing functionality while improving code readability and maintainability.
- Added optional parameters to the `transcribe` method in `mock_provider.py` for better flexibility in handling context information and progress callbacks during transcription tasks.
* Enhance Docker Compose and service management for LangFuse integration
  - Updated `services.py` to include new options for service management, allowing for forced recreation of containers during startup.
  - Added LangFuse configuration options in the setup wizard, improving user experience for observability setup.
  - Introduced new API endpoints for retrieving observability configuration, enhancing integration with the frontend.
  - Enhanced error handling and logging for service startup processes, ensuring better visibility of configuration issues.
  - Updated documentation to reflect changes in service management and LangFuse integration.
* Enhance README and add new ASR service configurations
  - Updated README.md to include a new section for the Desktop Menu Bar Client with an accompanying screenshot.
  - Added configuration options for the new `galileo` ASR provider in `pyproject.toml` and `uv.lock`, enhancing support for additional audio processing capabilities.
  - Modified Dockerfile to include `galileo` as an extra dependency for both main and test environments, improving service management.
  - Enhanced job handling in `queue_controller.py` to track batch progress for transcription jobs, providing better user feedback during processing.
  - Updated Queue.tsx to display batch progress for audio transcription jobs, improving user experience in the web interface.
  - Refactored System.tsx to allow for restarting both workers and backend services, enhancing service management capabilities.
* Refactor pre-commit configuration and enhance development scripts (#309)
  - Removed local hooks for Robot Framework tests and cleanup from `.pre-commit-config.yaml`, streamlining the pre-commit setup.
  - Updated `Makefile` to install pre-commit using the `uv` tool, improving dependency management.
  - Enhanced `restart.sh`, `start.sh`, `status.sh`, and `stop.sh` scripts to source a new `check_uv.sh` script for better environment validation.
  - Added new environment variables for Galileo observability in `.env.template`, improving observability setup.
  - Introduced OpenTelemetry initialization in `app_factory.py` for enhanced observability during application runtime.
* Update button event handling and plugin architecture
  - Changed button state terminology from `SINGLE_TAP` and `DOUBLE_TAP` to `SINGLE_PRESS` and `DOUBLE_PRESS` across various files, including documentation and code implementations.
  - Enhanced the `send_button_event` method to reflect the updated button state values, ensuring consistency in event handling.
  - Introduced new methods for managing button events in the plugin architecture, improving the overall interaction with device buttons.
  - Updated tests to align with the new button state definitions, ensuring robust coverage for the updated functionality.
* Add unit tests for Qwen3-ASR output parsing and repetition detection
  - Introduced a new test file `test_qwen3_asr_parsing.py` to validate the functionality of the `_parse_qwen3_output` and `detect_and_fix_repetitions` methods.
  - Implemented various test cases covering standard and edge cases for ASR output parsing, including language detection, handling of empty inputs, and unexpected text.
  - Added tests for repetition detection to ensure proper functionality based on specified thresholds.
  - Enhanced the `Makefile` to include a new target for running specific tests by name, tag, or file, improving test execution flexibility.
  - Created a shared prerequisite check script `check_uv.sh` to ensure the `uv` package manager is installed before running scripts, enhancing setup reliability.
* Add unit tests for Qwen3-ASR output parsing and repetition detection
  - Introduced a new test file `test_qwen3_asr_parsing.py` to validate the functionality of the `_parse_qwen3_output` and `detect_and_fix_repetitions` methods.
  - Implemented various test cases covering standard and edge cases for ASR output parsing, including language detection, handling of empty inputs, and unexpected text.
  - Added tests for repetition detection to ensure proper functionality based on specified thresholds.
* Refactor Redis session handling and enhance error management
  - Updated session retrieval logic in `queue_routes.py` to ensure proper closure of Redis connections using `await redis_client.aclose()`, improving resource management.
  - Enhanced error handling during session data retrieval, providing clearer logging for issues encountered while fetching session information.
  - Streamlined the session key scanning process, maintaining existing functionality while improving code readability and maintainability.
  - Added optional parameters to the `transcribe` method in `mock_provider.py` for better flexibility in handling context information and progress callbacks during transcription tasks.
* Refactor test workflows to utilize Makefile targets and enhance documentation
  - Replaced the `run-no-api-tests.sh` script with a Makefile target `make test-no-api` for executing tests without API keys, streamlining the testing process.
  - Updated GitHub Actions workflows and README documentation to reflect the new Makefile usage, improving clarity for contributors.
  - Removed the deprecated `run-no-api-tests.sh` script to reduce redundancy and simplify the codebase.
Summary by CodeRabbit
Release Notes
New Features
Improvements
Testing